Probability Theory




Bayesian symbolic regression: Automated equation discovery from a physicists' perspective

Guimera, Roger, Sales-Pardo, Marta

arXiv.org Machine Learning

Symbolic regression automates the process of learning closed-form mathematical models from data. Standard approaches to symbolic regression, as well as newer deep learning approaches, rely on heuristic model selection criteria, heuristic regularization, and heuristic exploration of model space. Here, we discuss the probabilistic approach to symbolic regression, an alternative to such heuristic approaches with direct connections to information theory and statistical physics. We show how the probabilistic approach establishes model plausibility from basic considerations and explicit approximations, and how it provides guarantees of performance that heuristic approaches lack. We also discuss how the probabilistic approach compels us to consider model ensembles, as opposed to single models.
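
For readers new to this framing, the central object is the posterior plausibility of a candidate expression $f$ given data $D$. A minimal sketch of the standard Bayesian identity behind this approach (the explicit approximations discussed in the paper refine the likelihood and prior terms):

    $p(f \mid D) \propto p(D \mid f)\, p(f), \qquad \mathcal{L}(f) = -\log p(f) - \log p(D \mid f).$

Minimizing the description length $\mathcal{L}(f)$ is equivalent to maximizing the posterior, and the prior $p(f)$ takes over the role that heuristic complexity penalties play in standard symbolic regression; sampling from $p(f \mid D)$ rather than optimizing it is what leads to the model ensembles mentioned above.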


Theory: Multidimensional Space of Events

Kavun, Sergii

arXiv.org Machine Learning

This paper extends Bayesian probability theory by developing a multidimensional space of events (MDSE) theory that accounts for mutual influences between events and hypotheses sets. While traditional Bayesian approaches assume conditional independence between certain variables, real-world systems often exhibit complex interdependencies that limit classical model applicability. Building on established probabilistic foundations, our approach introduces a mathematical formalism for modeling these complex relationships. We developed the MDSE theory through rigorous mathematical derivation and validated it using three complementary methodologies: analytical proofs, computational simulations, and case studies drawn from diverse domains. Results demonstrate that MDSE successfully models complex dependencies with 15-20% improved prediction accuracy compared to standard Bayesian methods when applied to datasets with high interdimensionality. This theory particularly excels in scenarios with over 50 interrelated variables, where traditional methods show exponential computational complexity growth while MDSE maintains polynomial scaling. Our findings indicate that MDSE provides a viable mathematical foundation for extending Bayesian reasoning to complex systems while maintaining computational tractability. This approach offers practical applications in engineering challenges including risk assessment, resource optimization, and forecasting problems where multiple interdependent factors must be simultaneously considered.
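
To make the motivating limitation concrete, the following Python sketch compares an exact posterior with the naive conditional-independence estimate when two evidence events are dependent given the hypothesis. The joint distribution and variable names are illustrative assumptions, not from the paper; this shows the classical limitation MDSE addresses, not the MDSE formalism itself.

    # Hypothetical joint distribution P(H, E1, E2) over binary variables,
    # chosen so that E1 and E2 are strongly dependent given H.
    joint = {
        (1, 1, 1): 0.20, (1, 1, 0): 0.02, (1, 0, 1): 0.02, (1, 0, 0): 0.16,
        (0, 1, 1): 0.05, (0, 1, 0): 0.15, (0, 0, 1): 0.15, (0, 0, 0): 0.25,
    }

    def p(pred):
        return sum(q for k, q in joint.items() if pred(k))

    # Exact posterior via the chain rule: P(H=1 | E1=1, E2=1)
    exact = p(lambda k: k == (1, 1, 1)) / p(lambda k: k[1] == 1 and k[2] == 1)

    # Naive-Bayes score, assuming E1 and E2 are independent given H
    def naive(h):
        ph = p(lambda k: k[0] == h)
        return ph * (p(lambda k: k[0] == h and k[1] == 1) / ph) \
                  * (p(lambda k: k[0] == h and k[2] == 1) / ph)

    naive_post = naive(1) / (naive(1) + naive(0))
    print(f"exact: {exact:.3f}  naive: {naive_post:.3f}")  # 0.800 vs 0.645

Here the independence assumption understates the posterior by a wide margin (0.645 versus 0.800), the kind of error that compounds as the number of interdependent variables grows.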


Recovering Event Probabilities from Large Language Model Embeddings via Axiomatic Constraints

Zhu, Jian-Qiao, Yan, Haijiang, Griffiths, Thomas L.

arXiv.org Artificial Intelligence

Rational decision-making under uncertainty requires coherent degrees of belief in events. However, event probabilities generated by Large Language Models (LLMs) have been shown to exhibit incoherence, violating the axioms of probability theory. This raises the question of whether coherent event probabilities can be recovered from the embeddings used by the models. If so, those derived probabilities could be used as more accurate estimates for events involving uncertainty. To explore this question, we propose enforcing axiomatic constraints, such as the additive rule of probability theory, in the latent space learned by an extended variational autoencoder (VAE) applied to LLM embeddings. This approach enables event probabilities to naturally emerge in the latent space as the VAE learns to both reconstruct the original embeddings and predict the embeddings of semantically related events. We evaluate our method on complementary events (i.e., event A and its complement, event not-A), where the true probabilities of the two events must sum to 1. Experimental results on open-weight language models demonstrate that probabilities recovered from embeddings exhibit greater coherence than those directly reported by the corresponding models and align closely with the true probabilities.
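
A minimal sketch of how such an axiomatic constraint might enter a VAE objective, assuming a PyTorch setup; the function name, arguments, and weighting below are illustrative assumptions rather than the paper's exact loss:

    import torch
    import torch.nn.functional as F

    def axiomatic_vae_loss(recon_a, emb_a, recon_not_a, emb_not_a,
                           mu, logvar, p_a, p_not_a, lam=1.0):
        # Reconstruct the embeddings of the event and its complement.
        recon = F.mse_loss(recon_a, emb_a) + F.mse_loss(recon_not_a, emb_not_a)
        # Standard Gaussian KL term of the VAE.
        kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
        # Additivity penalty: latent probabilities of A and not-A must sum to 1.
        additivity = torch.mean((p_a + p_not_a - 1.0).pow(2))
        return recon + kl + lam * additivity

The additivity term is what lets coherent probabilities emerge in the latent space: the encoder is free to place events anywhere, but complementary events are pushed toward probability estimates that respect the additive rule.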


Coupling quantum-like cognition with the neuronal networks within generalized probability theory

Khrennikov, Andrei, Ozawa, Masanao, Benninger, Felix, Shor, Oded

arXiv.org Artificial Intelligence

The past few years have seen a surge in the application of quantum theory methodologies and quantum-like modeling in fields such as cognition, psychology, and decision-making. Despite the success of this approach in explaining various psychological phenomena such as order, conjunction, disjunction, and response replicability effects, there remains a potential dissatisfaction due to its lack of a clear connection to neurophysiological processes in the brain. Currently, it remains a phenomenological approach. In this paper, we develop a quantum-like representation of networks of communicating neurons. This representation is not based on standard quantum theory but on generalized probability theory (GPT), with a focus on the operational measurement framework. Specifically, we use a version of GPT that relies on ordered linear state spaces rather than the traditional complex Hilbert spaces. A network of communicating neurons is modeled as a weighted directed graph, which is encoded by its weight matrix. The state space of these weight matrices is embedded within the GPT framework, incorporating effect observables and state updates within the theory of measurement instruments, a critical aspect of this model. This GPT-based approach successfully reproduces key quantum-like effects, such as order, non-repeatability, and disjunction effects (commonly associated with decision interference). Moreover, this framework supports quantum-like modeling in medical diagnostics for neurological conditions such as depression and epilepsy. While this paper focuses primarily on cognition and neuronal networks, the proposed formalism and methodology can be directly applied to a wide range of biological and social networks.
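
A toy numerical sketch of the ingredients named here (weight matrix as state, an effect as a [0, 1]-valued linear functional, and a renormalizing state update); the particular effect and update rule are illustrative assumptions, not the paper's instruments:

    import numpy as np

    # A network of three communicating neurons as a weighted directed graph.
    W = np.array([[0.0, 0.8, 0.2],
                  [0.5, 0.0, 0.5],
                  [0.1, 0.9, 0.0]])

    state = W / W.sum()  # normalized weight matrix as a "state"

    def effect(s):
        # Linear functional in [0, 1]: total weight flowing into neuron 1.
        return float(s[:, 1].sum())

    prob = effect(state)  # probability of a "yes" outcome for this effect

    # Crude measurement-instrument update: keep the weights consistent
    # with the observed outcome and renormalize.
    post = np.zeros_like(state)
    post[:, 1] = state[:, 1]
    post /= post.sum()

Because such updates do not in general commute, querying two effects in different orders yields different sequential outcome probabilities, which is exactly the order effect the framework reproduces.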


Improving Active Learning with a Bayesian Representation of Epistemic Uncertainty

Thomas, Jake, Houssineau, Jeremie

arXiv.org Artificial Intelligence

A popular strategy for active learning is to specifically target a reduction in epistemic uncertainty, since aleatoric uncertainty is often considered as being intrinsic to the system of interest and therefore not reducible. Yet, distinguishing these two types of uncertainty remains challenging and there is no single strategy that consistently outperforms the others. We propose to use a particular combination of probability and possibility theories, with the aim of using the latter to specifically represent epistemic uncertainty, and we show how this combination leads to new active learning strategies that have desirable properties. In order to demonstrate the efficiency of these strategies in non-trivial settings, we introduce the notion of a possibilistic Gaussian process (GP) and consider GP-based multiclass and binary classification problems, for which the proposed methods display a strong performance for both simulated and real datasets.
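
For orientation, a minimal variance-targeting active-learning loop on an ordinary GP is sketched below; it uses plain predictive variance as a stand-in for epistemic uncertainty, whereas the paper's contribution is to replace this proxy with a possibilistic GP (not implemented here):

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    rng = np.random.default_rng(0)
    f = lambda x: np.sin(3 * x).ravel()           # hypothetical target function

    pool = np.linspace(0, 2, 200).reshape(-1, 1)  # unlabeled candidate pool
    idx = list(rng.choice(len(pool), size=3, replace=False))  # seed labels

    for _ in range(10):
        gp = GaussianProcessRegressor(kernel=RBF(0.3)).fit(pool[idx], f(pool[idx]))
        _, std = gp.predict(pool, return_std=True)
        std[idx] = -np.inf                 # never re-query a labeled point
        idx.append(int(np.argmax(std)))    # query the most uncertain point

With noisy labels, predictive variance mixes aleatoric and epistemic components, which is precisely the ambiguity a dedicated representation of epistemic uncertainty is designed to resolve.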


Yes, Prime Minister, question order does matter -- and it's certainly not classical! But is it quantum?

Brody, Dorje C.

arXiv.org Artificial Intelligence

In an episode of the satirical British political sitcom Yes, Prime Minister from the 1980s, Sir Humphrey Appleby once explained to Bernard Woolley (two of the characters) how it is possible to get contradictory polling results by asking a series of leading questions beforehand. The polling discussed in the episode concerns whether the public is for or against the reintroduction of national service. Recently, the leading questions outlined by Appleby were put to the public by the market research and polling giant Ipsos, the findings of which have been made public to raise awareness of the fact that people can be misled by means of such questions [1]. The actual experiment conducted by Ipsos is explained on their website: "Ipsos interviewed a representative quota sample of 2,158 adults aged 16-75 in Great Britain. Half saw the 'Sample A' questions, reflecting a positive view about national service. Half saw 'Sample B', reflecting a negative view."


The negation of permutation mass function

Tang, Yongchuan, Li, Rongfei

arXiv.org Artificial Intelligence

Negation is an important perspective in knowledge representation. Existing negation methods are mainly applied in probability theory, evidence theory, and complex evidence theory. As a generalization of evidence theory, random permutation sets theory may represent information more precisely. However, how to apply the concept of negation to random permutation sets theory has not been studied. In this paper, the negation of the permutation mass function is proposed. Moreover, the convergence of the proposed negation method under repeated negation is verified. The trends of uncertainty and dissimilarity after each negation operation are investigated. Numerical examples are used to demonstrate the rationality of the proposed method.
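
For intuition, the classical analogue on an ordinary probability distribution is Yager's negation, neg(p)_i = (1 - p_i) / (n - 1); the paper lifts this idea to permutation mass functions, while the sketch below shows only the probabilistic case:

    import numpy as np

    def negate(p):
        # Yager's negation; preserves normalization since sum(1 - p_i) = n - 1.
        return (1.0 - p) / (len(p) - 1)

    p = np.array([0.7, 0.2, 0.1])
    for step in range(6):
        entropy = -(p * np.log(p)).sum()
        print(f"step {step}: p = {np.round(p, 4)}, entropy = {entropy:.4f}")
        p = negate(p)

In this example, repeated negation drives the distribution toward the uniform one, with Shannon entropy increasing toward log n, mirroring the kind of convergence and uncertainty trend the paper investigates for permutation mass functions.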


Limit of the Maximum Random Permutation Set Entropy

Zhou, Jiefeng, Li, Zhen, Cheong, Kang Hao, Deng, Yong

arXiv.org Artificial Intelligence

The Random Permutation Set (RPS) is a recently proposed type of set that can be regarded as a generalization of evidence theory. To measure the uncertainty of RPS, the entropy of RPS and its corresponding maximum entropy have been proposed. Exploring the maximum entropy provides a possible way of understanding the physical meaning of RPS. In this paper, a new concept, the envelope of the entropy function, is defined. In addition, the limit of the envelope of RPS entropy is derived and proved. Compared with the existing method, the computational complexity of calculating the envelope of RPS entropy decreases greatly. The result shows that as $N \to \infty$, the envelope of the entropy of RPS converges to $e \times (N!)^2$, which is closely connected to the constant $e$ and the factorial. Finally, numerical examples validate the efficiency and conciseness of the proposed envelope, which provides new insight into the maximum entropy function.
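
A quick numerical check of where the constant $e$ and the factorials come from, using only the size of the permutation event space (a counting identity related to, but not identical with, the paper's envelope derivation):

    import math

    # An N-element frame admits sum_{k=0}^{N} N!/(N-k)! ordered arrangements,
    # and since sum_{k=0}^{N} 1/k! -> e, this count approaches e * N!.
    for N in (2, 4, 8, 12):
        count = sum(math.factorial(N) // math.factorial(N - k) for k in range(N + 1))
        print(f"N={N:2d}: count={count}, e*N! ~= {math.e * math.factorial(N):.1f}")

The ratio count / (e * N!) approaches 1 rapidly, which makes the appearance of $e$ alongside factorials in the limit $e \times (N!)^2$ less mysterious.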